
    TIGER: A Tuning-Insensitive Approach for Optimally Estimating Gaussian Graphical Models

    We propose a new procedure for estimating high dimensional Gaussian graphical models. Our approach is asymptotically tuning-free and non-asymptotically tuning-insensitive: it requires very little effort to choose the tuning parameter in finite sample settings. Computationally, our procedure is significantly faster than existing methods due to its tuning-insensitive property. Theoretically, the obtained estimator is simultaneously minimax optimal for precision matrix estimation under different norms. Empirically, we illustrate the advantages of our method using thorough simulated and real data examples. The R package bigmatrix implementing the proposed methods is available on the Comprehensive R Archive Network: http://cran.r-project.org/
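    To make the neighborhood-regression idea behind such tuning-insensitive estimators concrete, here is a minimal Python sketch (not the authors' implementation; the function names, the cvxpy solver, the tuning value, and the final symmetrization are all illustrative assumptions): each variable is regressed on the others with a square-root-Lasso, and the fitted coefficients and residual scales are assembled into a precision-matrix estimate.

```python
# Illustrative sketch only, not the TIGER code itself.
import numpy as np
import cvxpy as cp

def sqrt_lasso_column(X, j, lam):
    """Regress column j of X on the remaining columns with a sqrt-Lasso."""
    n, d = X.shape
    idx = [k for k in range(d) if k != j]
    beta = cp.Variable(d - 1)
    resid = X[:, j] - X[:, idx] @ beta
    # sqrt-Lasso objective: ||resid||_2 / sqrt(n) + lam * ||beta||_1
    cp.Problem(cp.Minimize(cp.norm(resid, 2) / np.sqrt(n)
                           + lam * cp.norm1(beta))).solve()
    b = beta.value
    tau = np.linalg.norm(X[:, j] - X[:, idx] @ b) / np.sqrt(n)
    return idx, b, tau

def precision_estimate(X, lam):
    """Assemble column-wise regressions into a precision matrix."""
    n, d = X.shape
    Omega = np.zeros((d, d))
    for j in range(d):
        idx, b, tau = sqrt_lasso_column(X, j, lam)
        Omega[j, j] = 1.0 / tau**2      # estimated conditional precision
        Omega[idx, j] = -b / tau**2     # off-diagonal entries of column j
    return (Omega + Omega.T) / 2        # naive symmetrization (assumption)

# Usage with a pivotal-style tuning value lam ~ sqrt(log d / n):
# X = np.random.randn(200, 20)
# Om = precision_estimate(X, np.sqrt(np.log(20) / 200))
```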

    Adaptive variance function estimation in heteroscedastic nonparametric regression

    We consider a wavelet thresholding approach to adaptive variance function estimation in heteroscedastic nonparametric regression. A data-driven estimator is constructed by applying wavelet thresholding to the squared first-order differences of the observations. We show that the variance function estimator is nearly optimally adaptive to the smoothness of both the mean and variance functions. The estimator is shown to achieve the optimal adaptive rate of convergence under the pointwise squared error simultaneously over a range of smoothness classes. It is also adaptive and within a logarithmic factor of the minimax risk under the global mean integrated squared error over a collection of spatially inhomogeneous function classes. Numerical implementation and simulation results are also discussed.
    Comment: Published at http://dx.doi.org/10.1214/07-AOS509 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
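    A minimal sketch of the difference-based wavelet thresholding idea, assuming the PyWavelets package; the db4 wavelet, decomposition level, and universal threshold are illustrative choices, not the paper's exact construction.

```python
# Illustrative sketch of difference-based variance function estimation.
import numpy as np
import pywt

def variance_estimate(y, wavelet="db4", level=4):
    # Squared first-order differences: E[(y_{i+1} - y_i)^2] / 2 ~ V(x_i)
    # when the mean function is smooth.
    d2 = 0.5 * np.diff(y) ** 2
    coeffs = pywt.wavedec(d2, wavelet, level=level)
    # Universal threshold, noise scale from the finest-level coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(d2)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    v_hat = pywt.waverec(coeffs, wavelet)[: len(d2)]
    return np.clip(v_hat, 0, None)  # a variance must be nonnegative
```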

    Empirical information on nuclear matter fourth-order symmetry energy from an extended nuclear mass formula

    We establish a relation between the equation of state (EOS) of nuclear matter and the fourth-order symmetry energy $a_{\mathrm{sym,4}}(A)$ of finite nuclei in a semi-empirical nuclear mass formula by self-consistently considering the bulk, surface and Coulomb contributions to the nuclear mass. Such a relation allows us to extract information on the nuclear matter fourth-order symmetry energy $E_{\mathrm{sym,4}}(\rho_0)$ at normal nuclear density $\rho_0$ from analyzing nuclear mass data. Based on the recent precise extraction of $a_{\mathrm{sym,4}}(A)$ via the double difference of the "experimental" symmetry energy extracted from nuclear masses, we estimate, for the first time, a value of $E_{\mathrm{sym,4}}(\rho_0) = 20.0 \pm 4.6$ MeV. Such a value is significantly larger than the predictions of mean-field models and thus suggests the importance of considering effects beyond the mean-field approximation in nuclear matter calculations.
    Comment: 7 pages, 1 figure. Presentation improved and discussions added. Accepted version to appear in PL
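    For orientation, the fourth-order symmetry energy is defined through the standard expansion of the nucleonic EOS in the isospin asymmetry $\delta$ (a textbook relation, not reproduced from the paper):

```latex
E(\rho,\delta) = E_0(\rho) + E_{\mathrm{sym}}(\rho)\,\delta^{2}
               + E_{\mathrm{sym,4}}(\rho)\,\delta^{4} + \mathcal{O}(\delta^{6}),
\qquad
\delta = \frac{\rho_n - \rho_p}{\rho},
```

    so $E_{\mathrm{sym,4}}(\rho_0)$ is the coefficient of the $\delta^4$ term evaluated at normal density, which, per the abstract, mean-field models predict to be much smaller than the value extracted here.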

    Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates the regularization for each regression task with respect to its noise level, so that it simultaneously attains improved finite-sample performance and insensitivity to tuning. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence $\mathcal{O}(1/\epsilon)$, where $\epsilon$ is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network: http://cran.r-project.org/web/packages/camel/
    Comment: Journal of Machine Learning Research, 201
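    The proximal gradient machinery can be pictured with a short sketch. This is a simplification, not the CMR algorithm itself: a plain squared loss stands in for the paper's smoothed calibrated loss, and the row-wise group penalty is one common choice in multivariate regression.

```python
# Illustrative proximal gradient step for multivariate regression.
import numpy as np

def row_group_soft_threshold(B, t):
    """Prox of t * sum_j ||B[j, :]||_2: shrink each row toward zero."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * B

def prox_grad_step(X, Y, B, lam, step):
    """One step: gradient on the smooth loss, then the group prox.
    Squared loss is an assumption of this sketch, not CMR's loss."""
    grad = X.T @ (X @ B - Y) / X.shape[0]
    return row_group_soft_threshold(B - step * grad, step * lam)
```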

    Pivotal estimation via square-root Lasso in nonparametric regression

    We propose a self-tuning $\sqrt{\mathrm{Lasso}}$ method that simultaneously resolves three important practical problems in high-dimensional regression analysis: it handles the unknown scale, heteroscedasticity and (drastic) non-Gaussianity of the noise. In addition, our analysis allows for badly behaved designs, for example, perfectly collinear regressors, and generates sharp bounds even in extreme cases, such as the infinite variance case and the noiseless case, in contrast to Lasso. We establish various nonasymptotic bounds for $\sqrt{\mathrm{Lasso}}$, including prediction norm rate and sparsity. Our analysis is based on new impact factors that are tailored for bounding the prediction norm. In order to cover heteroscedastic non-Gaussian noise, we rely on moderate deviation theory for self-normalized sums to achieve Gaussian-like results under weak conditions. Moreover, we derive bounds on the performance of ordinary least squares (ols) applied to the model selected by $\sqrt{\mathrm{Lasso}}$, accounting for possible misspecification of the selected model. Under mild conditions, the rate of convergence of ols post $\sqrt{\mathrm{Lasso}}$ is as good as $\sqrt{\mathrm{Lasso}}$'s rate. As an application, we consider the use of $\sqrt{\mathrm{Lasso}}$ and ols post $\sqrt{\mathrm{Lasso}}$ as estimators of nuisance parameters in a generic semiparametric problem (nonlinear moment condition or $Z$-problem), resulting in a construction of $\sqrt{n}$-consistent and asymptotically normal estimators of the main parameters.
    Comment: Published at http://dx.doi.org/10.1214/14-AOS1204 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
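    To recall why the method is self-tuning: in the standard square-root Lasso formulation (generic form, not quoted from this paper), taking the square root of the quadratic loss makes the score at the true parameter scale-free, so the penalty level $\lambda$ can be set without knowing the noise level:

```latex
\widehat{\beta} \in \arg\min_{\beta \in \mathbb{R}^p}
\frac{\| y - X\beta \|_2}{\sqrt{n}} + \lambda \| \beta \|_1 .
```

    The gradient of the first term is $X^\top(y - X\beta)/(\sqrt{n}\,\|y - X\beta\|_2)$, a self-normalized quantity whose distribution at the truth does not depend on the noise scale; this is the object to which the moderate deviation theory for self-normalized sums mentioned above is applied.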

    Square-Root Lasso: Pivotal Recovery of Sparse Signals via Conic Programming

    We propose a pivotal method for estimating high-dimensional sparse linear regression models, where the overall number of regressors $p$ is large, possibly much larger than $n$, but only $s$ regressors are significant. The method is a modification of the lasso, called the square-root lasso. The method is pivotal in that it neither relies on knowledge of the standard deviation $\sigma$ nor needs to pre-estimate $\sigma$. Moreover, the method does not rely on normality or sub-Gaussianity of the noise. It achieves near-oracle performance, attaining the convergence rate $\sigma\{(s/n)\log p\}^{1/2}$ in the prediction norm, and thus matching the performance of the lasso with known $\sigma$. These performance results are valid for both Gaussian and non-Gaussian errors, under some mild moment restrictions. We formulate the square-root lasso as the solution to a convex conic programming problem, which allows us to implement the estimator using efficient algorithmic methods, such as interior-point and first-order methods.
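    As a concrete illustration of the conic formulation: the objective is second-order-cone representable, so it can be handed to any SOCP solver. Below is a minimal Python sketch using cvxpy as a stand-in for the interior-point and first-order methods mentioned; the helper name and the pivotal-style tuning value are illustrative assumptions.

```python
# Illustrative sketch of the square-root lasso as a conic program.
import numpy as np
import cvxpy as cp

def sqrt_lasso(X, y, lam):
    n, p = X.shape
    beta = cp.Variable(p)
    # ||y - X beta||_2 / sqrt(n) is second-order-cone representable,
    # so the whole problem is an SOCP.
    obj = cp.norm(y - X @ beta, 2) / np.sqrt(n) + lam * cp.norm1(beta)
    cp.Problem(cp.Minimize(obj)).solve()
    return beta.value

# Tuning that involves no estimate of sigma, e.g. lam ~ sqrt(log(p)/n):
# n, p = 100, 500
# X = np.random.randn(n, p)
# y = X[:, :5] @ np.ones(5) + np.random.randn(n)
# beta_hat = sqrt_lasso(X, y, 1.1 * np.sqrt(np.log(p) / n))
```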

    New Bounds for Restricted Isometry Constants

    In this paper we show that if the restricted isometry constant $\delta_k$ of the compressed sensing matrix satisfies $\delta_k < 0.307$, then $k$-sparse signals are guaranteed to be recovered exactly via $\ell_1$ minimization when no noise is present, and $k$-sparse signals can be estimated stably in the noisy case. It is also shown that the bound cannot be substantively improved. An explicit example is constructed in which $\delta_k = \frac{k-1}{2k-1} < 0.5$, but it is impossible to recover certain $k$-sparse signals.
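    The restricted isometry constant itself has a simple, if exponentially expensive, characterization that can be checked directly on small matrices: $\delta_k$ is the largest spectral norm of $A_S^\top A_S - I$ over all $k$-column submatrices $A_S$. The sketch below computes it by brute force (an illustration of the definition only; this is not how the paper's bound or counterexample is derived).

```python
# Brute-force restricted isometry constant for small matrices.
import itertools
import numpy as np

def restricted_isometry_constant(A, k):
    """delta_k = max over |S| = k of || A_S^T A_S - I ||_2."""
    n, p = A.shape
    delta = 0.0
    for S in itertools.combinations(range(p), k):
        G = A[:, list(S)].T @ A[:, list(S)] - np.eye(k)
        delta = max(delta, np.linalg.norm(G, 2))
    return delta

# For unit-norm columns delta_1 = 0; the result above says delta_k < 0.307
# guarantees exact l1 recovery of k-sparse signals in the noiseless case.
A = np.random.randn(20, 10)
A /= np.linalg.norm(A, axis=0)   # normalize columns
print(restricted_isometry_constant(A, 2))
```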